Progressive Neural Architecture Search
We propose a new method for learning the structure of convolutional neural
networks (CNNs) that is more efficient than recent state-of-the-art methods
based on reinforcement learning and evolutionary algorithms. Our approach uses
a sequential model-based optimization (SMBO) strategy, in which we search for
structures in order of increasing complexity, while simultaneously learning a
surrogate model to guide the search through structure space. Direct comparison
under the same search space shows that our method is up to 5 times more
efficient than the RL method of Zoph et al. (2018) in terms of number of models
evaluated, and 8 times faster in terms of total compute. The structures we
discover in this way achieve state-of-the-art classification accuracies on
CIFAR-10 and ImageNet.
Comment: To appear in ECCV 2018 as an oral presentation. The code and checkpoints for PNASNet-5
trained on ImageNet (both Mobile and Large) can now be downloaded from
https://github.com/tensorflow/models/tree/master/research/slim#Pretrained.
Also see https://github.com/chenxi116/PNASNet.TF for refactored and
simplified TensorFlow code; see https://github.com/chenxi116/PNASNet.pytorch
for an exact conversion to PyTorch.
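The progressive, surrogate-guided search described above can be sketched in a few lines. This is an illustrative toy under stated assumptions, not the authors' implementation: an "architecture" is just a tuple of operation names, `train_and_evaluate` is a random stand-in for actually training a CNN, and the surrogate is a trivial per-operation score average rather than the learned predictor used in the paper.

```python
import random

# Toy stand-ins: an "architecture" is a tuple of ops; its true quality is
# unknown until evaluated (simulated here by a fixed random score per arch).
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]
random.seed(0)
_true_score = {}

def train_and_evaluate(arch):
    """Stand-in for training a model and measuring its accuracy."""
    if arch not in _true_score:
        _true_score[arch] = random.random()
    return _true_score[arch]

def surrogate_fit(history):
    """Fit a trivial surrogate: the average observed score per operation."""
    op_scores = {}
    for arch, score in history:
        for op in arch:
            op_scores.setdefault(op, []).append(score)
    return {op: sum(v) / len(v) for op, v in op_scores.items()}

def surrogate_predict(model, arch):
    """Predict quality of an unseen architecture from its operations."""
    return sum(model.get(op, 0.5) for op in arch) / len(arch)

def progressive_search(max_complexity=3, beam=4):
    # Complexity 1: evaluate every single-op structure exhaustively.
    candidates = [(op,) for op in OPS]
    history = [(a, train_and_evaluate(a)) for a in candidates]
    for _ in range(2, max_complexity + 1):
        model = surrogate_fit(history)
        # Expand each candidate by one op, rank by the surrogate, and only
        # train the top `beam` -- the SMBO pruning step that saves compute.
        expanded = [a + (op,) for a, _ in history for op in OPS]
        expanded.sort(key=lambda a: surrogate_predict(model, a), reverse=True)
        history = [(a, train_and_evaluate(a)) for a in expanded[:beam]]
    return max(history, key=lambda t: t[1])

best_arch, best_score = progressive_search()
print(best_arch, round(best_score, 3))
```

The efficiency gain comes from the pruning step: instead of training every structure at each complexity level, only the surrogate's top-ranked candidates are trained, and the surrogate improves as more (structure, accuracy) pairs are observed.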
Application of repeat-pass TerraSAR-X staring spotlight interferometric coherence to monitor pasture biophysical parameters: limitations and sensitivity analysis
This paper describes the potential and limitations of repeat-pass synthetic aperture radar interferometry (InSAR) for retrieving the biophysical parameters of intensively managed pastures. We used a time series of eight acquisitions from the TerraSAR-X Staring Spotlight (TSX-ST) mode. The ST mode differs from the conventional Stripmap mode; we therefore adjusted the Doppler phase correction for interferometric processing. We analyzed three interferometric pairs with an 11-day temporal baseline and found that only one of them yields high coherence. The results show that the high coherence in certain paddocks is due to grass cutting in June, whereas the temporal decorrelation in the other paddocks is mainly due to grass growth and the high sensitivity of X-band SAR signals to vegetation cover. The InSAR coherence (over coherent paddocks) correlates well with SAR backscatter (R² = 0.65, p < 0.05) and with grassland biophysical parameters (R² = 0.55 for height and R² = 0.75 for biomass, both p < 0.05). It is thus possible to detect different management practices (e.g., grazing, mowing/cutting) using SAR backscatter (dB) and coherence information from high-spatial-resolution, short-baseline X-band imagery; however, the rate of decorrelation over vegetated areas is high. Initial findings from the June pair show that changes due to grass growth, grazing, and mowing events can be detected using InSAR coherence information. However, it is not possible to automatically categorize the paddocks undergoing these changes from the SAR backscatter and coherence values alone, because of the ambiguity caused by tall grass flattened by the wind.
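The interferometric coherence that the analysis above relies on is the standard complex correlation estimator between two co-registered SAR acquisitions. A minimal sketch, assuming the two images are given as flat lists of complex samples over an estimation window (this is the textbook estimator, not the authors' TSX-ST processing chain):

```python
import cmath

def coherence(s1, s2):
    """Sample coherence |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2)
    over a co-registered window of complex SAR samples. Values near 1 mean
    the scattering was stable between acquisitions; values near 0 mean
    temporal decorrelation (e.g., vegetation growth between passes)."""
    num = sum(a * b.conjugate() for a, b in zip(s1, s2))
    den = (sum(abs(a) ** 2 for a in s1) * sum(abs(b) ** 2 for b in s2)) ** 0.5
    return abs(num) / den if den else 0.0

# Identical signals (up to a deterministic phase ramp) are fully coherent:
w = [cmath.exp(1j * 0.3 * k) for k in range(64)]
print(round(coherence(w, w), 3))  # 1.0
```

In practice this estimator is applied per pixel over a small sliding window after co-registration and interferometric phase flattening.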
Comparison of ASCA and ROSAT Cluster Temperatures: A2256, A3558 and AWM7
We address the consistency between ASCA and ROSAT spatially-resolved cluster
temperature measurements, which is of significant interest given the recent
ASCA reports of temperature gradients in several hot clusters. We reanalyze
ROSAT PSPC data on A2256 (originally analyzed by Briel & Henry) using the newer
calibration and a technique less sensitive to the calibration uncertainties,
and find a temperature decline with radius in good agreement with ASCA's. We
also present ASCA temperature maps and radial profiles of A3558 and AWM7 and
compare them to the published ROSAT results. In A3558, we detect an asymmetric
temperature pattern and a slight radial decline. We do not find any significant
temperature variations in AWM7, except around the cD galaxy. Radial temperature
profiles of these two clusters are in a qualitative agreement with ROSAT.
However, while their ASCA average temperatures agree with other high-energy
instruments, ROSAT temperatures are lower by factors of 1.7 and 1.25,
respectively. We find that including realistic estimates of the current ROSAT
systematic uncertainties enlarges the temperature confidence intervals so that
ROSAT measurements are consistent with others for these clusters as well. Due
to the limited energy coverage of ROSAT PSPC, its results for the hotter
clusters are highly sensitive to calibration uncertainties. We conclude that at
the present calibration accuracy, there is no disagreement between ASCA and
other instruments. On the scientific side, a ROSAT temperature underestimate
for A3558 may be responsible for the anomalously high gas to total mass
fraction found by Bardelli et al. in this system.
Comment: SIS data added for A3558, which increased the significance of the
temperature structure. Accepted for ApJ. LaTeX, 8 pages, 4 figures.
Evaluation of pharmacotherapy effectiveness and treatment adherence in patients with bronchial asthma using online questionnaires
Keywords: bronchial asthma / drug therapy; obstructive lung diseases; patient compliance with the treatment regimen; patient agreement with the treatment regimen; patient adherence to the regimen; drug therapy; pharmacotherapy; questionnaire survey; online survey
Simple Open-Vocabulary Object Detection with Vision Transformers
Combining simple architectures with large-scale pre-training has led to
massive improvements in image classification. For object detection,
pre-training and scaling approaches are less well established, especially in
the long-tailed and open-vocabulary setting, where training data is relatively
scarce. In this paper, we propose a strong recipe for transferring image-text
models to open-vocabulary object detection. We use a standard Vision
Transformer architecture with minimal modifications, contrastive image-text
pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling
properties of this setup shows that increasing image-level pre-training and
model size yield consistent improvements on the downstream detection task. We
provide the adaptation strategies and regularizations needed to attain very
strong performance on zero-shot text-conditioned and one-shot image-conditioned
object detection. Code and models are available on GitHub.
Comment: ECCV 2022 camera-ready version.
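The zero-shot, text-conditioned scoring step in recipes like the one above can be illustrated generically: each predicted box carries an image embedding, each query class name carries a text embedding from the same contrastive space, and a box is labeled by its most similar query. A minimal sketch with stated assumptions: the embeddings below are random stand-ins, not outputs of the actual ViT image and text encoders, and `DIM` is an arbitrary toy dimension.

```python
import math
import random

random.seed(0)
DIM = 8  # toy embedding dimension (real models use hundreds of dims)

def normalize(v):
    """Scale a vector to unit norm so cosine similarity is a dot product."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Stand-ins for encoder outputs: one embedding per predicted box and one per
# query class name (in an image-text detector these come from the image and
# text encoders; here they are random unit vectors for illustration).
box_embeddings = [normalize([random.gauss(0, 1) for _ in range(DIM)])
                  for _ in range(5)]
queries = {name: normalize([random.gauss(0, 1) for _ in range(DIM)])
           for name in ["cat", "dog", "car"]}

def classify_boxes(box_embs, query_embs):
    """Zero-shot classification: each box gets the query with the highest
    cosine similarity (embeddings are unit-norm, so a plain dot product)."""
    results = []
    for emb in box_embs:
        scores = {name: sum(a * b for a, b in zip(emb, q))
                  for name, q in query_embs.items()}
        best = max(scores, key=scores.get)
        results.append((best, scores[best]))
    return results

for label, score in classify_boxes(box_embeddings, queries):
    print(label, round(score, 3))
```

Because the query set is just a list of embedded strings, the vocabulary can be changed at inference time without retraining, which is what makes the detector "open-vocabulary".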